Supplementary Material: Semantic Co-segmentation in Videos

Authors

  • Yi-Hsuan Tsai
  • Guangyu Zhong
  • Ming-Hsuan Yang
Abstract

We analyze the proposed tracklet co-selection method on the YouTube-Objects dataset in the setting without any prior knowledge. We first evaluate the importance of the facility location term F(A) and the unary term U(A) in the submodular function. Table 1 reports both the intersection-over-union (overlap) ratio for semantic segmentation and the average precision (AP) for classification under the same threshold (i.e., 0.85, as used in the manuscript). With only the facility location term, which measures object similarity, the results are less accurate due to noisy tracklets; the unary term ensures the quality of the selected tracklets, and hence combining the two terms produces better results. In Table 2, we show the average overlap ratio over all categories for semantic segmentation with different thresholds applied to the re-ranked tracklets. Since a low threshold may result in selecting more tracklets, including more noisy ones, we also report the average F-measure for object classification. Note that we achieve the best results for both segmentation and classification with a threshold of 0.75.
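The abstract describes maximizing a submodular objective that combines a facility location term F(A) (object similarity coverage) with a unary term U(A) (per-tracklet quality), with a threshold controlling how many tracklets are selected. The sketch below illustrates a standard greedy scheme for such an objective; the function name, the weighting parameter `lam`, and the ratio-based stopping rule are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def greedy_coselect(W, u, lam=1.0, stop_ratio=0.75):
    """Greedy maximization of F(A) + lam * U(A), where
      F(A) = sum_i max_{j in A} W[i, j]   (facility location: similarity coverage)
      U(A) = sum_{j in A} u[j]            (unary: per-tracklet quality)
    Selection stops once the marginal gain falls below
    stop_ratio * (first marginal gain) -- an assumed stand-in for the
    paper's re-ranking threshold.
    """
    n = W.shape[1]
    selected = []
    coverage = np.zeros(W.shape[0])  # current best similarity per covered element
    first_gain = None
    for _ in range(n):
        best_j, best_gain = None, -np.inf
        for j in range(n):
            if j in selected:
                continue
            # Marginal gain of adding tracklet j: coverage improvement + unary quality.
            gain = (np.maximum(coverage, W[:, j]).sum()
                    - coverage.sum() + lam * u[j])
            if gain > best_gain:
                best_gain, best_j = gain, j
        if first_gain is None:
            first_gain = best_gain
        if best_gain < stop_ratio * first_gain:
            break  # remaining tracklets are too noisy / redundant
        selected.append(best_j)
        coverage = np.maximum(coverage, W[:, best_j])
    return selected
```

A higher `stop_ratio` selects fewer, higher-quality tracklets, mirroring the trade-off the abstract reports between the 0.85 and 0.75 thresholds.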


Similar articles

Semantic Co-segmentation in Videos

Discovering and segmenting objects in videos is a challenging task due to large variations of objects in appearances, deformed shapes and cluttered backgrounds. In this paper, we propose to segment objects and understand their visual semantics from a collection of videos that link to each other, which we refer to as semantic co-segmentation. Without any prior knowledge on videos, we first extra...


Coherent Motion Segmentation in Moving Camera Videos using Optical Flow Orientations - Supplementary material

In this supplementary material, we present some additional details and results for our paper. The sampling procedure used in our graphical model is in Section 2. The entire set of 46 orientation fields used as the “library” FOFs are presented in Section 3. Comparison of our segmentation results to various other methods is presented in Section 4. In Section 5, we compare our model to other model...


STD2P: RGBD Semantic Segmentation Using Spatio-Temporal Data-Driven Pooling Supplementary Material

In the supplementary material, we present the analysis of semantic boundary accuracy in Section 1. In Section 2, we evaluate the oracle performance on the NYUDv2 40-class task with our spatio-temporal data-driven pooling. In Section 3, we analyze the ground-truth annotations of the NYUDv2 40-class task. In Section 4, we provide the qualitative results of the semantic segmentation results of the NYUDv...


Supplementary Material for Video Propagation Networks

In this supplementary, we present experiment protocols and additional qualitative results for experiments on video object segmentation, semantic video segmentation and video color propagation. Table 1 shows the feature scales and other parameters used in different experiments. Figures 1, 2 show some qualitative results on video object segmentation with some failure cases in Fig. 3. Figure 4 sho...


Supplementary Material: Personalized Cinemagraphs using Semantic Understanding and Collaborative Learning

This is a part of the supplementary material. The contents of this supplementary material include user study information, implementation details including parameter setups, additional results for the cinemagraph generation and the human preference prediction, and supplementary tables, which have not been shown in the main paper due to the space limit. The supplementary material for resulting vi...



Publication date: 2016